From Internet of Things to Internet of Data Apps
We introduce the Internet of Data Apps (IoDA), representing the next natural
progression of the Internet, Big Data, AI, and the Internet of Things. Despite
advancements in these fields, the full potential of universal data access - the
capability to seamlessly consume and contribute data via data applications -
remains stifled by organizational and technological silos. To address these
constraints, we propose the design of an IoDA layer, drawing inspiration
from standard Internet protocols. This layer facilitates the
interconnection of data applications across different devices and domains. This
short paper serves as an invitation to dialogue over this proposal.
Comment: 5 pages, 2 figures
Human-Scale Computing: A Case for Progressive Narrow Waist for Internet Applications
In the era where personal devices and applications are pervasive, individuals
are continuously generating and interacting with a vast amount of data. Despite
this, access to and control over such data remains challenging due to its
scattering across various app providers and formats. This paper presents
Human-Scale Computing, a vision and an approach where every individual has
straightforward, unified access to their data across all devices, apps, and
services. Key to this solution is the Human Scale Portal, a progressively
designed intermediary that integrates different applications and service
providers. This design adopts a transitional development and deployment
strategy, involving an initial bootstrapping phase to engage application
providers, an acceleration phase to enhance the convenience of access, and an
eventual solution. We believe that this progressive "narrow waist" design can
bridge the gap between the current state of data access and our envisioned
future of human-scale access.
Comment: 6 pages, 1 figure
Not-a-Bot (NAB): Improving Service Availability in the Face of Botnet Attacks
A large fraction of email spam, distributed denial-of-service (DDoS) attacks, and click-fraud on web advertisements are caused by traffic sent from compromised machines that form botnets. This paper posits that by identifying human-generated traffic as such, one can service it with improved reliability or higher priority, mitigating the effects of botnet attacks.
The key challenge is to identify human-generated traffic in the absence of strong unique identities. We develop NAB ("Not-A-Bot"), a system to approximately identify and certify human-generated activity. NAB uses a small trusted software component called an attester, which runs on the client machine with an untrusted OS and applications. The attester tags each request with an attestation if the request is made within a small amount of time of legitimate keyboard or mouse activity. The remote entity serving the request sends the request and attestation to a verifier, which checks the attestation and implements an application-specific policy for attested requests.
Our implementation of the attester is within the Xen hypervisor. By analyzing traces of keyboard and mouse activity from 328 users at Intel, together with adversarial traces of spam, DDoS, and click-fraud activity, we estimate that NAB reduces the amount of spam that currently passes through a tuned spam filter by more than 92%, while not flagging any legitimate email as spam. NAB delivers similar benefits to legitimate requests under DDoS and click-fraud attacks.
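The attester/verifier protocol described above can be sketched in a few lines. This is a toy illustration, not NAB's actual implementation: NAB binds attestations to trusted hardware inside the Xen hypervisor, whereas this sketch assumes a pre-shared HMAC key and an externally supplied clock, and all names (Attester, Verifier, DELTA) are hypothetical.

```python
import hashlib
import hmac

DELTA = 1.0  # seconds after human input during which a request is attestable (assumed value)

class Attester:
    """Trusted client-side component: signs requests made soon after human input."""
    def __init__(self, key: bytes):
        self._key = key
        self._last_input = float("-inf")

    def on_input_event(self, now: float) -> None:
        # Record legitimate keyboard or mouse activity.
        self._last_input = now

    def attest(self, request: bytes, now: float):
        # Only requests issued within DELTA of real input receive an attestation.
        if now - self._last_input > DELTA:
            return None
        return hmac.new(self._key, request, hashlib.sha256).digest()

class Verifier:
    """Server-side component: checks the attestation before applying policy."""
    def __init__(self, key: bytes):
        self._key = key

    def verify(self, request: bytes, attestation) -> bool:
        if attestation is None:
            return False
        expected = hmac.new(self._key, request, hashlib.sha256).digest()
        return hmac.compare_digest(expected, attestation)
```

A bot driving the machine without input events simply gets no attestation, so its traffic can be deprioritized rather than blocked outright.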
From Kubernetes to Knactor: A State-Centric Rethink of Service Integration
Microservices are increasingly used in modern applications, leading to a
growing need for effective service integration solutions. However, we argue
that traditional API-centric integration mechanisms (e.g., RPC, REST, and
Pub/Sub) hamper the modularity of microservices. These mechanisms introduce
rigid code-level coupling, scatter integration logic, and hinder visibility
into cross-service state exchanges. Ultimately, these limitations complicate
the maintenance and evolution of microservice-based applications. In response,
we propose a rethinking of service integration and present Knactor, a new
state-centric integration framework to restore the modularity that
microservices were intended to offer. Knactor decouples service integration
from service development, allowing integration to be implemented as explicit
state exchanges among multiple services. Our initial case study suggests that
Knactor simplifies service integration and creates new opportunities for
optimizations.
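The state-centric idea can be illustrated with a toy sketch. The names and API here are entirely hypothetical, not Knactor's actual interface: services publish state into a shared store, and the integration logic lives outside both services as a reaction to state changes, rather than as RPC calls embedded in service code.

```python
class StateStore:
    """Minimal shared state store standing in for a state-centric exchange."""
    def __init__(self):
        self._state = {}
        self._watchers = []

    def put(self, key, value):
        self._state[key] = value
        for fn in self._watchers:
            fn(key, value, self)

    def get(self, key, default=None):
        return self._state.get(key, default)

    def watch(self, fn):
        self._watchers.append(fn)

# Integration logic is a separate module: it observes state published by a
# hypothetical "orders" service and derives state for a "shipping" service.
def integrate_orders_to_shipping(key, value, store):
    if key.startswith("order/") and value.get("paid"):
        order_id = key.split("/", 1)[1]
        # Write directly rather than via put() to avoid re-triggering watchers.
        store._state[f"shipment/{order_id}"] = {"status": "pending"}
```

Neither service knows the other exists; swapping or adding consumers means registering another watcher, not editing service code.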
Droplet: Decentralized Authorization for IoT Data Streams
This paper presents Droplet, a decentralized data access control service,
which operates without intermediate trust entities. Droplet enables data owners
to securely and selectively share their encrypted data while guaranteeing data
confidentiality against unauthorized parties. Droplet's contribution lies in
coupling two key ideas: (i) a new cryptographically-enforced access control
scheme for encrypted data streams that enables users to define fine-grained
stream-specific access policies, and (ii) a decentralized authorization service
that handles user-defined access policies. In this paper, we present Droplet's
design, the reference implementation of Droplet, and experimental results of
three case-study apps atop Droplet: the Fitbit activity tracker, the Ava
health tracker, and the ECOviz smart meter dashboard.
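One simple way to get per-interval keys for an encrypted stream, illustrating the flavor of stream-specific access policies (though not necessarily Droplet's actual construction), is a one-way hash chain: holding the key for interval i lets a party derive keys for all later intervals, but not earlier ones, so sharing a single key grants access "from interval i onward". All function names here are illustrative.

```python
import hashlib

def interval_key(seed: bytes, i: int) -> bytes:
    """Key for interval i: the i-fold hash of the stream's secret seed."""
    k = seed
    for _ in range(i):
        k = hashlib.sha256(k).digest()
    return k

def derive_forward(k_i: bytes, steps: int) -> bytes:
    """Anyone holding the key for interval i can derive keys for interval
    i + steps, but the one-way hash prevents recovering earlier keys."""
    k = k_i
    for _ in range(steps):
        k = hashlib.sha256(k).digest()
    return k
```

A data owner who wants to share readings starting at interval 3 hands out interval_key(seed, 3); the recipient derives every subsequent key locally, with no per-interval interaction with the owner.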
E2: a framework for NFV applications
By moving network appliance functionality from proprietary
hardware to software, Network Function Virtualization
promises to bring the advantages of cloud computing to
network packet processing. However, the evolution of cloud
computing (particularly for data analytics) has greatly benefited from
application-independent methods for scaling and placement that achieve
high efficiency while relieving programmers of these burdens. NFV has no
such general management solutions. In this paper, we present a scalable
and application-agnostic scheduling framework for packet processing, and
compare its performance to current approaches.
Controlling Parallelism on Multi-core Software Routers
Software routers promise to enable the fast deployment of new, sophisticated kinds of packet processing without the need to buy and deploy expensive new equipment. The challenge is offering such programmability while achieving a competitive level of performance. Recent work has demonstrated that software routers are capable of high performance, but only for conventional, simple workloads (like packet forwarding and IP routing), and even then only after careful manual calibration. In contrast, we are interested in achieving high performance in the context of a software router running multiple sophisticated packet-processing applications. In particular: first, we identify the main factors that affect packet-processing performance on a modern multicore general-purpose server (cache misses, cache contention, and load balancing across processing cores); then, we formulate an optimization problem that takes as input a particular server architecture and a packet-processing flow, and determines how to parallelize the router's functionality across the available cores so as to maximize its throughput.
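The core of such an optimization problem, stripped of the cache-contention terms the paper's full formulation would need, is assigning packet-processing elements to cores so that the most loaded core (the throughput bottleneck) is as light as possible. A brute-force sketch, with hypothetical names and per-element costs as inputs:

```python
from itertools import product

def best_assignment(costs, n_cores):
    """Exhaustively try every assignment of processing elements (with given
    per-packet CPU costs) to cores, and keep the one whose most-loaded core
    is lightest; that core bounds the router's throughput."""
    best, best_load = None, float("inf")
    n = len(costs)
    for assign in product(range(n_cores), repeat=n):
        loads = [0.0] * n_cores
        for elem in range(n):
            loads[assign[elem]] += costs[elem]
        bottleneck = max(loads)
        if bottleneck < best_load:
            best_load, best = bottleneck, assign
    return best, best_load
```

Exhaustive search is exponential in the number of elements; a real solver would exploit structure in the processing flow, and would also model cache effects, which this sketch ignores.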
3PO: Programmed Far-Memory Prefetching for Oblivious Applications
Using memory located on remote machines, or far memory, as a swap space is a
promising approach to meet the increasing memory demands of modern datacenter
applications. Operating systems have long relied on prefetchers to mask the
increased latency of fetching pages from swap space to main memory.
Unfortunately, with traditional prefetching heuristics, performance still
degrades when applications use far memory. In this paper we propose a new
prefetching technique for far-memory applications. We focus our efforts on
memory-intensive, oblivious applications whose memory access patterns are
independent of their inputs, such as matrix multiplication. For this class of
applications we observe that we can perfectly prefetch pages without relying on
heuristics. However, prefetching perfectly without requiring significant
application modifications is challenging.
In this paper we describe the design and implementation of 3PO, a system that
provides pre-planned prefetching for general oblivious applications. We
demonstrate that 3PO can accelerate applications, e.g., running them
30-150% faster than under Linux's prefetcher with 20% local memory. We
also use 3PO to understand the fundamental software overheads of
prefetching in a paging-based system, and the minimum performance penalty
they impose when applications run under constrained local memory.
Comment: 14 pages
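The key observation, that an oblivious application's page-access trace is input-independent and can therefore be recorded once and replayed as a prefetch plan, can be sketched as follows. The function name and the lookahead parameter are illustrative, not 3PO's actual interface:

```python
def plan_prefetches(trace, lookahead):
    """Given a recorded page-access trace from a profiling run, build a plan:
    at step t, issue a prefetch for the page the application will touch at
    step t + lookahead, hiding the far-memory fetch latency behind the
    intervening computation. No runtime heuristics are needed."""
    plan = []
    for t in range(len(trace)):
        ahead = t + lookahead
        plan.append(trace[ahead] if ahead < len(trace) else None)
    return plan
```

Because the trace is exact rather than predicted, every fetch arrives in local memory before it is needed, provided the lookahead covers the far-memory round-trip time.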
Evaluating the Suitability of Server Network Cards for Software Routers
The advent of multicore CPUs has led to renewed interest in software routers built from commodity PC hardware. However, fully exploiting the parallelism due to multiple cores requires the ability to efficiently parallelize the delivery of packets to cores. That is, traffic arriving (departing) on an incoming (outgoing) link at a router is inherently serial, and hence we need a mechanism that appropriately demultiplexes (multiplexes) traffic between a serial link and a set of cores. Recent efforts point to modern server network interface cards (NICs) as offering the required mux/demux capability through new hardware classification features. However, there has been little evaluation of the extent to which these features match the requirements of software routing from the standpoint of both performance and functionality. This paper takes a first step towards such an evaluation, comparing a commodity server NIC to both an idealized "parallel" NIC and a "serial only" NIC. We show that although commodity NICs do improve on serial-only NICs, they lag an ideal parallel NIC. We find similar gaps in the classification features these NICs offer. We thus conclude with recommendations for NIC modifications that we believe would improve their suitability for software routers.
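The hardware demux capability these NICs offer is typically a receive-side-scaling style hash over the flow 5-tuple, so that all packets of a flow land on the same core without any serial software dispatch step. A minimal software sketch of that classification (packet fields and function name are illustrative):

```python
import zlib

def rss_demux(pkt: dict, n_cores: int) -> int:
    """Map a packet to a core by hashing its flow 5-tuple, so packets of the
    same flow are always delivered to the same core (preserving per-flow
    ordering) while distinct flows spread across cores."""
    five_tuple = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    key = "|".join(map(str, five_tuple)).encode()
    return zlib.crc32(key) % n_cores
```

Real NICs compute a Toeplitz hash in hardware and index an indirection table; the CRC here merely illustrates the deterministic flow-to-core mapping that makes parallel delivery possible.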